Why the Pentagon is at war with Anthropic over AI use
Surveillance fears, autonomous weapons and a legal showdown reshape military AI rules

A high-stakes dispute over military use of artificial intelligence erupted into public view this week as Defense Secretary Pete Hegseth abruptly terminated Anthropic's work with the Pentagon and other government agencies, invoking a law designed to counter foreign supply chain threats to slap a scarlet letter on a US company.
President Donald Trump and Hegseth accused rising AI star Anthropic of endangering national security after its CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance or autonomous armed drones.
The San Francisco-based company has vowed to sue over Hegseth's decision to designate Anthropic a supply chain risk, calling the move legally unsound and "never before publicly applied to an American company."
The looming legal battle could have huge implications for the balance of power in Big Tech at a critical juncture, as well as for the rules governing military use of AI and the guardrails meant to prevent the technology from posing threats to human life.
The dustup has already resulted in a coup for ChatGPT maker OpenAI, which stepped into the void to make its technology available to the Pentagon after Anthropic objected to some of the Trump administration's terms.
It's a turn of events likely to deepen the animosity between OpenAI CEO Sam Altman, who was temporarily ousted by his own board in late 2023 over questions about his trustworthiness, and Amodei, who left OpenAI in 2021 to launch Anthropic partly because of concerns about AI safety.
Implications of being designated a supply chain risk
The Department of Defense's move to label Anthropic a risk to the nation's defense supply chain will end its contract with the AI company, worth up to USD 200 million. It will also, according to the Pentagon, prohibit other defense contractors from doing business with Anthropic.
Trump wrote on Truth Social that most government agencies must immediately stop using Anthropic's AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms. Anthropic argues that Hegseth doesn't have the legal authority to stop business relationships with other defense contractors.
Any company that still holds a commercial contract with Anthropic can continue to use its products for non-defense projects, the company wrote in a statement. The supply chain risk designation was created to give American military leaders a way to limit the Pentagon's exposure to companies posing a potential security risk.
The list has typically included firms with ties to adversaries, such as telecom giant Huawei, which has links to China, or cybersecurity specialist Kaspersky, which has links to Russia. In the case of Anthropic, the designation serves as a warning to other AI and defense companies: Fail to meet our demands and you will be blacklisted.
"We don't need it, we don't want it, and will not do business with them again!" Trump said on social media. Trump's six-month grace period for the Pentagon essentially opens a window for other companies to get the classified security clearances that are needed to work with the agency.
How the standoff affects Anthropic's business
Anthropic says it has yet to be formally notified of Hegseth's designation. "When we receive some kind of formal action, we will look at it, we will understand it and we will challenge it in court," Amodei vowed during an interview with CBS News that will be aired Sunday morning.
For now, Anthropic is trying to convince businesses and government agencies that the Trump administration's supply chain risk designation only affects military contractors' use of Claude, its AI chatbot and computer coding agent, on Department of Defense work.
"Your use for any other purpose is unaffected," Anthropic wrote in its statement. Making that distinction clear is crucial for Anthropic because most of its projected USD 14 billion in revenue this year comes from businesses and government agencies that are using Claude for computer coding and other tasks.
More than 500 customers are paying Anthropic at least USD 1 million annually for Claude, according to an announcement disclosing an investment that valued the company at USD 380 billion.

